109 research outputs found

    Building a self-adaptive content distribution network


    Web crawlers on a health related portal: Detection, characterisation and implications

    Web crawlers are automated computer programs that visit websites in order to download their content. They are employed for non-malicious purposes (search engine crawlers indexing websites) and malicious ones (those breaching privacy by harvesting email addresses for unsolicited email promotion and spam databases). Whatever their usage, web crawlers need to be accurately identified in an analysis of the overall traffic to a website. Visits from web crawlers as well as from genuine users are recorded in the web server logs. In this paper, we analyse the web server logs of NRIC, a health-related portal. We present the techniques used to identify malicious and non-malicious web crawlers from these logs, using a blacklist database and analysis of the characteristics of the online behaviour of malicious crawlers. We use visualisation to carry out sanity checks along the crawler removal process. We illustrate these techniques using three months of web server logs from NRIC. We use a combination of visualisation and baseline measures from Google Analytics to demonstrate the efficacy of our techniques. Finally, we discuss the implications of our work on the analysis of the web traffic to a website using web server logs and on the interpretation of the results from such analysis. © 2011 IEEE
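    The blacklist-plus-heuristics filtering described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the IP addresses, user-agent patterns, and log-entry structure are all assumed for the example.

    ```python
    import re

    # Hypothetical blacklist entries; a real blacklist database (as used in
    # the paper) would hold many known crawler IPs and signatures.
    BLACKLISTED_IPS = {"192.0.2.10", "198.51.100.7"}

    # Common self-identifying crawler tokens in user-agent strings.
    CRAWLER_UA_PATTERN = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)

    def is_crawler(entry):
        """Flag a parsed log entry as a crawler via blacklist or user-agent match."""
        return (entry["ip"] in BLACKLISTED_IPS
                or bool(CRAWLER_UA_PATTERN.search(entry["user_agent"])))

    # Toy parsed log; real server logs would first be parsed from their raw format.
    log = [
        {"ip": "192.0.2.10", "user_agent": "BadBot/1.0"},
        {"ip": "203.0.113.5", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
        {"ip": "203.0.113.9", "user_agent": "Googlebot/2.1"},
    ]
    genuine = [e for e in log if not is_crawler(e)]
    ```

    Stealthy malicious crawlers that spoof browser user agents would pass such a filter, which is why the paper additionally analyses behavioural characteristics (e.g. request patterns) and uses visualisation as a sanity check.
    
    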

    The patia autonomic webserver: Feasibility experimentation


    Patia: Adaptive distributed webserver (A position paper)

    This paper introduces the Patia Adaptive Webserver architecture, which is distributed and consists of semi-autonomous agents called FLYs. The FLY carries with it the set of rules and adaptivity policies required to deliver the data to the requesting client. Where a change in the FLY’s external environment could affect performance, it is the FLY’s responsibility to change the method of delivery (or the actual object being delivered). It is our conjecture that the success of today’s multimedia websites in terms of performance lies in the architecture of the underlying servers and their ability to adapt to changes in demand and resource availability, as well as their ability to scale. We believe that the distributed and autonomous nature of this system is a key factor in achieving this.
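    A FLY-style agent that carries its own adaptivity policies and switches delivery method when its environment changes could be sketched as below. The class name, policy structure, and environment fields are illustrative assumptions, not the Patia API.

    ```python
    # Illustrative sketch of a FLY-like semi-autonomous delivery agent.
    class Fly:
        def __init__(self, content, policies):
            self.content = content
            # Ordered adaptivity policies: (condition on environment) -> delivery method.
            self.policies = policies
            self.method = "full-quality"

        def observe(self, environment):
            """Adapt the delivery method when the external environment changes."""
            for condition, method in self.policies:
                if condition(environment):
                    self.method = method
                    return
            self.method = "full-quality"  # default when no policy fires

    fly = Fly("video.mp4", [
        (lambda env: env["bandwidth_kbps"] < 500, "low-bitrate"),
        (lambda env: env["server_load"] > 0.9, "redirect-to-replica"),
    ])
    fly.observe({"bandwidth_kbps": 300, "server_load": 0.2})  # -> "low-bitrate"
    ```

    The point of the design, as the abstract argues, is that the adaptation logic travels with the agent rather than living in a central controller.
    
    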

    Modeling User Preferences in Recommender Systems: A Classification Framework for Explicit and Implicit User Feedback

    Recommender systems are firmly established as a standard technology for assisting users with their choices; however, little attention has been paid to the application of the user model in recommender systems, particularly the variability and noise that are an intrinsic part of human behavior and activity. To enable recommender systems to suggest items that are useful to a particular user, it can be essential to understand the user and his or her interactions with the system. These interactions typically manifest themselves as explicit and implicit user feedback that provides the key indicators for modeling users' preferences for items and essential information for personalizing recommendations. In this article, we propose a classification framework for the use of explicit and implicit user feedback in recommender systems based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance. We develop a set of comparison criteria for explicit and implicit user feedback to emphasize the key properties. Using our framework, we provide a classification of recommender systems that have addressed questions about user feedback, and we review state-of-the-art techniques to improve such user feedback and thereby improve the performance of the recommender system. Finally, we formulate challenges for future research on improvement of user feedback. © 2014 ACM
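    One way to picture the proposed classification is to encode each feedback signal along the named properties. The field values and example signals below are illustrative assumptions; the paper defines the properties, not this particular encoding.

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class FeedbackType(Enum):
        EXPLICIT = "explicit"   # e.g. star ratings, thumbs up/down
        IMPLICIT = "implicit"   # e.g. clicks, purchases, dwell time

    class Scale(Enum):
        # Scale of Measurement property from the framework.
        NOMINAL = "nominal"
        ORDINAL = "ordinal"
        INTERVAL = "interval"
        RATIO = "ratio"

    @dataclass
    class FeedbackSignal:
        name: str
        kind: FeedbackType
        cognitive_effort: str    # effort demanded of the user, e.g. "low"/"high"
        scale: Scale
        domain_relevant: bool    # whether interpretation depends on the item domain

    # Two illustrative signals classified under the framework's properties.
    rating = FeedbackSignal("5-star rating", FeedbackType.EXPLICIT, "high",
                            Scale.ORDINAL, False)
    dwell = FeedbackSignal("dwell time", FeedbackType.IMPLICIT, "low",
                           Scale.RATIO, True)
    ```

    Such an encoding makes the comparison criteria mechanical: signals can be grouped or filtered by any property when deciding how to weight them in a preference model.
    
    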

    Enabling Collaborative eHealth Research using Web 2.0 Tools

    In this paper, we describe two Web 2.0 based systems designed to facilitate and enhance collaborative eHealth research activities. Using a combination of Forums, Wikis and connectivity to third-party social networking systems, we have designed systems to support collaborative document creation (including editing, reviewing and publication), dissemination of material to relevant communities, discussion of ideas, and sharing of opinions. The ECDC Field Epidemiology Manual Wiki and Medicine Support Unit Online Forums are presented herein, including an overview of the system architectures and user interaction models. We present our planned methods of evaluation, focusing on the ability to measure successful and sustainable community involvement.

    Evaluation of genetic diversity between 27 banana cultivars (Musa spp.) in Mauritius using RAPD markers

    Cultivated bananas (Musa spp.) are mostly diploid or triploid cultivars with various combinations of the A and B genomes inherited from their diploid ancestors Musa acuminata Colla. and Musa balbisiana Colla. respectively. Random amplified polymorphic DNA (RAPD) markers were used to establish the relatedness of 27 accessions in the Mauritian Musa germplasm. Fifteen decamer primers produced a total of 115 reproducible amplification products, of which 96 were polymorphic. Computation of the genetic distances shows that similarities ranged from 0.3 to 1.0 with an average of 0.51. With a few exceptions, cluster analysis differentiated pure A containing cultivars from those containing at least one B genome. This paper answers long-standing questions on the taxonomic placement of the cultivar ‘Banane Rouge’ by providing the basis for its classification within the homogenomic A cultivars. The results presented here also contribute to narrowing the gaps in our current understanding of the migration path of bananas and the emergence of secondary centers of diversity.
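    Pairwise similarities of the kind reported above are typically computed from binary band-presence profiles, for example with the Dice (Nei and Li) coefficient. The abstract does not state which coefficient was used, so the choice of coefficient and the toy profiles below are assumptions for illustration.

    ```python
    def dice_similarity(a, b):
        """Dice (Nei & Li) coefficient for binary band-presence vectors:
        2 * shared bands / (bands in a + bands in b)."""
        shared = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
        total = sum(a) + sum(b)
        return 2 * shared / total if total else 0.0

    # Toy band-presence profiles (1 = band amplified by a primer) for three
    # hypothetical accessions; a real RAPD matrix here would have 115 columns.
    acc1 = [1, 1, 0, 1, 0]
    acc2 = [1, 1, 0, 0, 1]
    acc3 = [0, 0, 1, 1, 1]

    print(dice_similarity(acc1, acc2))  # 2*2/(3+3) ≈ 0.667
    ```

    Cluster analysis (e.g. UPGMA) over the resulting similarity matrix is what then separates homogenomic A cultivars from those carrying a B genome.
    
    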

    A user-centred evaluation framework for the Sealife semantic web browsers

    Background: Semantically-enriched browsing has enhanced the browsing experience by providing contextualised dynamically generated Web content, and quicker access to searched-for information. However, adoption of Semantic Web technologies is limited, and users from outside the IT domain remain sceptical. Furthermore, little attention has been given to evaluating semantic browsers with real users to demonstrate the enhancements and obtain valuable feedback. The Sealife project investigates semantic browsing and its application to the life science domain. Sealife's main objective is to develop the notion of context-based information integration by extending three existing Semantic Web browsers (SWBs) to link the existing Web to the eScience infrastructure. / Methods: This paper describes a user-centred evaluation framework that was developed to evaluate the Sealife SWBs and elicited feedback on users' perceptions of ease of use and information findability. Three sources of data: i) web server logs; ii) user questionnaires; and iii) semi-structured interviews were analysed, and comparisons were made between each browser and a control system. / Results: It was found that the evaluation framework successfully elicited users' perceptions of the three distinct SWBs. The results indicate that the browser with the most mature and polished interface was rated higher for usability, and semantic links were used by the users of all three browsers. / Conclusion: Confirmation or contradiction of our original hypotheses in relation to SWBs is detailed, along with observations of implementation issues.